Invertible Orientation Scores of 3D Images
The enhancement and detection of elongated structures in noisy image data is
relevant for many biomedical applications. To handle complex crossing
structures in 2D images, 2D orientation scores were introduced and have
already proven useful in a variety of applications. Here we extend this work to 3D
orientation scores. First, we construct the orientation score from a given
dataset, which is achieved by an invertible coherent state type of transform.
For this transformation we introduce 3D versions of the 2D cake-wavelets, which
are complex wavelets that can simultaneously detect oriented structures and
oriented edges. For efficient implementation of the different steps in the
wavelet creation we use a spherical harmonic transform. Finally, we show some
first results of practical applications of 3D orientation scores.
Comment: The SSVM 2015 published version in LNCS contains a mistake (a
switched notation of the spherical angles) that is corrected in this arXiv
version.
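The lifting step behind this line of work can be illustrated in 2D. The sketch below is a minimal, hypothetical stand-in: it uses rotated anisotropic Gaussian filters instead of the cake-wavelets of the paper (so it does not form an exactly invertible frame), but it shows the basic structure of the transform: an image becomes a stack U(x, θ) of oriented filter responses.

```python
import numpy as np

def orientation_score(f, n_orient=8, sigma=2.0, elong=4.0):
    """Lift a 2D image f (H, W) to a stack U (n_orient, H, W) of oriented
    responses. Anisotropic Gaussians stand in for cake-wavelets here, so
    this is illustrative only and not an invertible orientation score."""
    H, W = f.shape
    # periodic pixel coordinates centered at the origin (kernel at (0, 0))
    y, x = np.meshgrid(np.fft.fftfreq(H) * H, np.fft.fftfreq(W) * W,
                       indexing="ij")
    F = np.fft.fft2(f)
    U = np.empty((n_orient, H, W))
    for k in range(n_orient):
        th = np.pi * k / n_orient          # orientations sampled mod pi
        a = np.cos(th) * x + np.sin(th) * y    # along the orientation
        b = -np.sin(th) * x + np.cos(th) * y   # across the orientation
        psi = np.exp(-(a**2 / (2 * (elong * sigma)**2)
                       + b**2 / (2 * sigma**2)))
        psi /= psi.sum()
        # periodic convolution with the rotated kernel via the FFT
        U[k] = np.real(np.fft.ifft2(F * np.fft.fft2(psi)))
    return U
```

For a vertical line the response over θ peaks at the vertical orientation, which is the property the wavelets exploit to separate crossing structures.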
Left-invariant Stochastic Evolution Equations on SE(2) and its Applications to Contour Enhancement and Contour Completion via Invertible Orientation Scores
We provide the explicit solutions of linear, left-invariant,
(convection)-diffusion equations and the corresponding resolvent equations on
the 2D-Euclidean motion group SE(2). These diffusion equations are forward
Kolmogorov equations for stochastic processes for contour enhancement and
completion. The solutions are group-convolutions with the corresponding Green's
function, which we derive in explicit form. We mainly focus on the Kolmogorov
equations for contour enhancement processes which, in contrast to the
Kolmogorov equations for contour completion, do not include convection. The
Green's functions of these left-invariant partial differential equations
coincide with the heat-kernels on SE(2), which we explicitly derive. Then we
compute completion distributions on SE(2) which are the product of a forward
and a backward resolvent evolved from a source and a sink distribution on
SE(2), respectively. On the one hand, the modes of Mumford's direction process
for contour completion coincide with elastica curves minimizing
$\int \kappa^2 + \epsilon \, \mathrm{d}s$, related to zero-crossings of two
left-invariant derivatives of the completion distribution. On the other hand,
the completion measure for the contour enhancement concentrates on geodesics
minimizing $\int \sqrt{\kappa^2 + \epsilon} \, \mathrm{d}s$. This motivates a
comparison between geodesics and elastica,
which are quite similar. However, we derive more practical analytic solutions
for the geodesics. The theory is motivated by medical image analysis
applications where enhancement of elongated structures in noisy images is
required. We use left-invariant (non)-linear evolution processes for automated
contour enhancement on invertible orientation scores, obtained from an image by
means of a special type of unitary wavelet transform.
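The contour-enhancement evolution described above can be sketched numerically. The code below is a simplified explicit finite-difference step, not the paper's exact Green's-function solution, for a diffusion of the form ∂t W = (D11 A² + D22 ∂θ²) W, where A = cos θ ∂x + sin θ ∂y is the left-invariant derivative along the orientation; grid spacings, coefficients, and periodic boundaries are illustrative assumptions.

```python
import numpy as np

def along_derivative(W, thetas, h=1.0):
    """Left-invariant derivative A = cos(theta) d/dx + sin(theta) d/dy of an
    orientation score W with shape (n_theta, H, W), periodic boundaries."""
    dWdx = (np.roll(W, -1, axis=2) - np.roll(W, 1, axis=2)) / (2 * h)
    dWdy = (np.roll(W, -1, axis=1) - np.roll(W, 1, axis=1)) / (2 * h)
    c = np.cos(thetas)[:, None, None]
    s = np.sin(thetas)[:, None, None]
    return c * dWdx + s * dWdy

def enhancement_step(W, thetas, D11=1.0, D22=0.05, dt=0.1, h=1.0):
    """One explicit Euler step of dW/dt = D11*A^2 W + D22*d^2 W/dtheta^2,
    the hypo-elliptic diffusion used for contour enhancement."""
    dtheta = thetas[1] - thetas[0]
    AAW = along_derivative(along_derivative(W, thetas, h), thetas, h)
    Wtt = (np.roll(W, -1, axis=0) - 2 * W + np.roll(W, 1, axis=0)) / dtheta**2
    return W + dt * (D11 * AAW + D22 * Wtt)
```

With periodic boundaries this discretization conserves total mass exactly, consistent with the diffusion being in divergence form.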
A PDE Approach to Data-driven Sub-Riemannian Geodesics in SE(2)
We present a new flexible wavefront propagation algorithm for the boundary
value problem for sub-Riemannian (SR) geodesics in the roto-translation group
$SE(2) = \mathbb{R}^2 \rtimes S^1$ with a metric tensor depending on a smooth
external cost $\mathcal{C}$, with $\mathcal{C} \geq \delta > 0$, computed from
image data. The method consists of a first step where an SR-distance map is
computed as a viscosity solution of a Hamilton-Jacobi-Bellman (HJB) system
derived via Pontryagin's Maximum Principle (PMP). Subsequent backward
integration, again relying on PMP, gives the SR-geodesics. For $\mathcal{C} = 1$
we show that our method produces the global minimizers. Comparison with exact
solutions shows a remarkable accuracy of the SR-spheres and the SR-geodesics.
We present numerical computations of Maxwell points and cusp points, which we
again verify for the uniform cost case $\mathcal{C} = 1$. Regarding image
analysis applications, tracking of elongated structures in retinal and
synthetic images show that our line tracking generically deals with crossings.
We show the benefits of including the sub-Riemannian geometry.
Comment: Extended version of SSVM 2015 conference article "Data-driven
Sub-Riemannian Geodesics in SE(2)".
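The two-step structure of the method, first compute a distance map from a seed point, then integrate backwards to obtain the geodesic, can be illustrated by a much-simplified analogue: Dijkstra on a 4-connected 2D grid with an external cost, leaving out the sub-Riemannian metric and the HJB machinery entirely. All names and the cost convention below are illustrative.

```python
import heapq
import numpy as np

def distance_map(cost, seed):
    """Dijkstra distance map on a 4-connected grid; cost[i, j] is the
    price of stepping onto pixel (i, j), a crude external-cost analogue."""
    H, W = cost.shape
    dist = np.full((H, W), np.inf)
    dist[seed] = 0.0
    pq = [(0.0, seed)]
    while pq:
        d, (i, j) = heapq.heappop(pq)
        if d > dist[i, j]:
            continue  # stale queue entry
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < H and 0 <= nj < W and d + cost[ni, nj] < dist[ni, nj]:
                dist[ni, nj] = d + cost[ni, nj]
                heapq.heappush(pq, (dist[ni, nj], (ni, nj)))
    return dist

def backtrack(dist, target):
    """Steepest-descent backtracking from target to the seed (dist == 0)."""
    path = [target]
    while dist[path[-1]] > 0:
        i, j = path[-1]
        nbrs = [(i + di, j + dj)
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))
                if 0 <= i + di < dist.shape[0] and 0 <= j + dj < dist.shape[1]]
        path.append(min(nbrs, key=lambda p: dist[p]))
    return path
```

In the paper the role of Dijkstra is played by a viscosity solution of the HJB system on SE(2), and the steepest descent by backward integration of the PMP equations; the overall distance-map-then-backtrack pattern is the same.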
Numerical Approaches for Linear Left-invariant Diffusions on SE(2), their Comparison to Exact Solutions, and their Applications in Retinal Imaging
Left-invariant PDE-evolutions on the roto-translation group $SE(2)$ (and
their resolvent equations) have been widely studied in the fields of cortical
modeling and image analysis. They include hypo-elliptic diffusion (for contour
enhancement) proposed by Citti & Sarti, and Petitot, and they include the
direction process (for contour completion) proposed by Mumford. This paper
presents a thorough study and comparison of the many numerical approaches,
which, remarkably, is missing in the literature. Existing numerical approaches
can be classified into 3 categories: Finite difference methods, Fourier based
methods (equivalent to $SE(2)$-Fourier methods), and stochastic methods (Monte
Carlo simulations). There are also 3 types of exact solutions to the
PDE-evolutions that were derived explicitly (in the spatial Fourier domain) in
previous works by Duits and van Almsick in 2005. Here we provide an overview of
these 3 types of exact solutions and explain how they relate to each of the 3
numerical approaches. We compute relative errors of all numerical approaches to
the exact solutions, and the Fourier based methods show the best performance,
with the smallest relative errors. We also provide an improvement of Mathematica
algorithms for evaluating Mathieu-functions, crucial in implementations of the
exact solutions. Furthermore, we include an asymptotic analysis of the
singularities within the kernels and we propose a probabilistic extension of
underlying stochastic processes that overcomes the singular behavior in the
origin of time-integrated kernels. Finally, we show retinal imaging
applications of combining left-invariant PDE-evolutions with invertible
orientation scores.
Comment: A final and corrected version of the manuscript is published in
Numerical Mathematics: Theory, Methods and Applications (NM-TMA), vol. 9,
p. 1-50, 2016.
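The stochastic category of methods can be sketched directly: in Mumford's direction process a particle moves along its current orientation while that orientation undergoes Brownian motion. The minimal Monte Carlo simulation below (all parameters illustrative) produces sample end points whose histogram approximates the corresponding Green's function.

```python
import numpy as np

def simulate_direction_process(n_paths=1000, n_steps=100, dt=0.01,
                               sigma=1.0, seed=0):
    """Sample end points of Mumford's direction process started at the
    origin with orientation theta = 0 (moving in the +x direction)."""
    rng = np.random.default_rng(seed)
    theta = np.zeros(n_paths)
    xy = np.zeros((n_paths, 2))
    for _ in range(n_steps):
        # deterministic transport along the current orientation ...
        xy[:, 0] += np.cos(theta) * dt
        xy[:, 1] += np.sin(theta) * dt
        # ... while the orientation diffuses
        theta += sigma * np.sqrt(dt) * rng.standard_normal(n_paths)
    return xy
```

The sample mean drifts along the initial orientation while spreading laterally, which is the qualitative shape of the contour-completion kernels the paper compares against exact solutions.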
PDE-based Group Equivariant Convolutional Neural Networks
We present a PDE-based framework that generalizes Group equivariant
Convolutional Neural Networks (G-CNNs). In this framework, a network layer is
seen as a set of PDE-solvers where geometrically meaningful PDE-coefficients
become the layer's trainable weights. Formulating our PDEs on homogeneous
spaces allows these networks to be designed with built-in symmetries such as
rotation in addition to the standard translation equivariance of CNNs.
Having all the desired symmetries included in the design obviates the need to
include them by means of costly techniques such as data augmentation. We will
discuss our PDE-based G-CNNs (PDE-G-CNNs) in a general homogeneous space
setting while also going into the specifics of our primary case of interest:
roto-translation equivariance.
We solve the PDE of interest by a combination of linear group convolutions
and non-linear morphological group convolutions with analytic kernel
approximations that we underpin with formal theorems. Our kernel approximations
allow for fast GPU-implementation of the PDE-solvers; we release our
implementation with this article in the form of the LieTorch extension to
PyTorch, available at https://gitlab.com/bsmetsjr/lietorch . Just as for
linear convolution, a morphological convolution is specified by a kernel that we
train in our PDE-G-CNNs. In PDE-G-CNNs we do not use non-linearities such as
max/min-pooling and ReLUs as they are already subsumed by morphological
convolutions.
We present a set of experiments to demonstrate the strength of the proposed
PDE-G-CNNs in increasing the performance of deep learning based imaging
applications with far fewer parameters than traditional CNNs.
Comment: 27 pages, 18 figures. v2 changes: - mentioned KerCNNs - added section
Generalization of G-CNNs - clarification that the experiments utilized
automatic differentiation and SGD. v3 changes: - streamlined theoretical
framework - formulation and proof of Thm. 1 & 2 - expanded experiments. v4
changes: typos in Prop. 5 and (20). v5/6 changes: minor revisions.
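The morphological convolution that subsumes pooling and ReLUs can be shown in a minimal 1D form: the infimal convolution (f □ k)(x) = inf_y { f(y) + k(x − y) }, i.e. an erosion of f by the structuring function k. The quadratic kernel below is an illustrative stand-in for the trained kernels of PDE-G-CNNs, not the paper's analytic kernel approximations.

```python
import numpy as np

def morphological_convolution(f, k):
    """Infimal convolution (f box k)(i) = min_j f(j) + k(i - j) of a 1D
    signal f with a structuring kernel k given on offsets -r..r."""
    r = len(k) // 2
    out = np.empty_like(f)
    for i in range(len(f)):
        vals = [f[j] + k[i - j + r]
                for j in range(max(0, i - r), min(len(f), i + r + 1))]
        out[i] = min(vals)
    return out

# quadratic structuring kernel k(d) = d^2 on offsets -2..2 (illustrative)
quadratic_kernel = np.array([4.0, 1.0, 0.0, 1.0, 4.0])
```

Replacing min by max gives the dual dilation; in PDE-G-CNNs these operations play the role of the non-linearities, which is why separate max/min-pooling and ReLU layers become unnecessary.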
Total Variation and Mean Curvature PDEs on $\mathbb{R}^d \rtimes S^{d-1}$
Total variation regularization and total variation flows (TVF) have been
widely applied for image enhancement and denoising. To include a generic
preservation of crossing curvilinear structures in TVF we lift images to the
homogeneous space M of positions and orientations as a Lie group quotient in
SE(d). For d = 2 this is called 'total
roto-translation variation' by Chambolle & Pock. We extend this to d = 3, by a
PDE-approach with a limiting procedure for which we prove convergence. We also
include a Mean Curvature Flow (MCF) in our PDE model on M. This was first
proposed for d = 2 by Citti et al. and we extend this to d = 3. Furthermore,
for d = 2 we take advantage of locally optimal differential frames in
invertible orientation scores (OS). We apply our TVF and MCF in the
denoising/enhancement of crossing fiber bundles in DW-MRI. In comparison to
data-driven diffusions, we see a better preservation of bundle boundaries and
angular sharpness in fiber orientation densities at crossings. We support this
by error comparisons on a noisy DW-MRI phantom. We also apply our TVF and MCF
in enhancement of crossing elongated structures in 2D images via OS, and
compare the results to nonlinear diffusions (CED-OS) via OS.
Comment: Submission to the Seventh International Conference on Scale Space and
Variational Methods in Computer Vision (SSVM 2019). (v2) Typo correction in
Lemma 1. (v3) Typo correction last paragraph page
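A single explicit step of a regularized total variation flow on a plain 2D image, without the lift to the space of positions and orientations, can be sketched as follows; the regularization ε, the time step, and the periodic boundary handling are illustrative choices, not those of the paper.

```python
import numpy as np

def tv_flow_step(u, dt=0.1, eps=1e-2):
    """One explicit step of du/dt = div(grad u / |grad u|_eps), using
    forward differences for the gradient, backward differences for the
    divergence, and periodic boundaries."""
    ux = np.roll(u, -1, axis=1) - u
    uy = np.roll(u, -1, axis=0) - u
    mag = np.sqrt(ux**2 + uy**2 + eps**2)   # regularized gradient magnitude
    px, py = ux / mag, uy / mag
    div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
    return u + dt * div
```

Because the update is in divergence form, total mass is conserved exactly; the lifted TVF of the paper adds to this the preservation of crossing structures that a 2D flow cannot provide.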
Analysis of (sub-)Riemannian PDE-G-CNNs
Group equivariant convolutional neural networks (G-CNNs) have been successfully applied in geometric deep learning. Typically, G-CNNs have the advantage over CNNs that they do not waste network capacity on training symmetries that should have been hard-coded in the network. The recently introduced framework of PDE-based G-CNNs (PDE-G-CNNs) generalizes G-CNNs. PDE-G-CNNs have the core advantages that they simultaneously (1) reduce network complexity, (2) increase classification performance, and (3) provide geometric interpretability. Their implementations primarily consist of linear and morphological convolutions with kernels. In this paper, we show that the previously suggested approximative morphological kernels do not always approximate the exact kernels accurately. More specifically, depending on the spatial anisotropy of the Riemannian metric, we argue that one must resort to sub-Riemannian approximations. We solve this problem by providing a new approximative kernel that works regardless of the anisotropy. We provide new theorems with better error estimates of the approximative kernels, and prove that they all carry the same reflectional symmetries as the exact ones. We test the effectiveness of multiple approximative kernels within the PDE-G-CNN framework on two datasets, and observe an improvement with the new approximative kernels. We report that the PDE-G-CNNs again allow for a considerable reduction of network complexity while having comparable or better performance than G-CNNs and CNNs on the two datasets. Moreover, PDE-G-CNNs have the advantage of better geometric interpretability over G-CNNs, as the morphological kernels are related to association fields from neurogeometry.